157 research outputs found

    Verfahrenswahl bei Risiko (Process Choice under Risk)

    Abstract: This paper examines the process-choice problem under risk. We show that the decision rules derived under certainty remain valid under an uncertain output quantity only if a risk-neutral decision maker is assumed. Under risk aversion, the solution of the process-choice problem depends on the assumed objective. From a cost-based perspective, the switch to processes with higher fixed costs and lower variable unit costs occurs earlier than under certainty; from a contribution-margin-based perspective, by contrast, it occurs later. It can also happen that processes that are efficient under risk neutrality are dominated by other processes under risk aversion, and vice versa. Note, however, that a process-choice decision based on an isolated consideration of the cost risk is admissible only if utility is independent of the output quantity. Finally, within an agency model, we show that, despite the principal's risk neutrality, the process-choice problem cannot be solved independently of the solution of the incentive problem, since otherwise the break-even quantity for the process switch is overestimated. An optimal solution of the process-choice problem therefore requires not only accounting for the decision maker's risk attitude but also correctly accounting for the remaining framework conditions of the decision problem.
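    Under certainty, the break-even rule the abstract refers to can be made concrete with a small worked example (the cost figures below are illustrative, not taken from the paper):

```python
# Worked example with illustrative numbers (not from the paper):
# under certainty, switching to a process with higher fixed costs F2 but lower
# variable unit costs c2 pays off beyond the break-even quantity
#   x* = (F2 - F1) / (c1 - c2).
F1, c1 = 10_000.0, 8.0   # process 1: low fixed costs, high variable unit costs
F2, c2 = 30_000.0, 4.0   # process 2: high fixed costs, low variable unit costs

x_star = (F2 - F1) / (c1 - c2)
print(x_star)  # 5000.0

def total_cost(F, c, x):
    """Total cost of producing x units with fixed costs F and unit costs c."""
    return F + c * x

# Below x* process 1 is cheaper, above x* process 2 is cheaper.
print(total_cost(F1, c1, 4000), total_cost(F2, c2, 4000))  # 42000.0 46000.0
print(total_cost(F1, c1, 6000), total_cost(F2, c2, 6000))  # 58000.0 54000.0
```

    The paper's point is that this quantity is correct only under risk neutrality; under risk aversion or unresolved incentive problems, the effective switching point moves.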

    JOINT_FORCES: unite competing sentiment classifiers with random forest

    In this paper, we describe how we created a meta-classifier to detect the message-level sentiment of tweets. We participated in SemEval-2014 Task 9B by combining the results of several existing classifiers using a random forest. The results of 5 other teams from the competition as well as of 7 general-purpose commercial classifiers were used to train the algorithm. This way, we were able to get a boost of up to 3.24 F1-score points.
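    A minimal sketch of such a random-forest meta-classifier (the data, label encoding, and classifier count below are made up for demonstration, not the paper's setup):

```python
# Illustrative sketch: combine base sentiment classifiers with a random forest.
from sklearn.ensemble import RandomForestClassifier

# Each row holds the predictions of 12 base classifiers for one tweet,
# encoded as -1 (negative), 0 (neutral), 1 (positive).
base_predictions = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    [-1, -1, 0, -1, 0, -1, -1, -1, 0, -1, -1, -1],
    [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
]
gold_labels = [1, -1, 0, 1]  # message-level gold sentiment per tweet

# The forest learns which base classifiers to trust in which situations.
meta = RandomForestClassifier(n_estimators=100, random_state=0)
meta.fit(base_predictions, gold_labels)

new_tweet = [[1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]]  # mostly "positive" votes
print(meta.predict(new_tweet))
```

    Instead of hard labels, the base classifiers' confidence scores could also serve as features; the sketch uses hard labels only.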

    Homo Novus

    Based on current and projected breakthroughs in biological, genetic, and digital technologies—and their possible convergences—contemporary transhumanism confronts the Christian faith with the question: can finite beings be saved from suffering, illness and death? Transhumanists emphatically embrace this possibility as they offer their concrete visions of a future self-redemption through science, medicine, and technology. Transhumanism aims to take control of the evolutionary process and to steer it into a better future for humanity, or rather, their artificial successors. This book is a comprehensive and constructive critique of the transhumanist agenda and its underlying sociotechnical imaginary, worldview, and anthropology. For this task, it draws on theological resources of Christian tradition(s) in novel ways that serve to render the Christian faith plausible in a digital age. In developing a theology that explores the creative potential of “perfected finitude” (Vollendlichkeit) from an eschatological perspective, it contributes to a “theology of technology”.

    The transhumanist aim of improving human beings physically and psychologically has a long history. What is new in the present are the design potentials and scopes for action opened up by biological, genetic, and digital technologies. They compel human beings to decide: how can, should, and will they (let themselves) be determined as the “new human” (homo novus) in the future? In engaging with this question, the concerns of transhumanism are discussed constructively and critically from the perspective of the Christian faith and confronted with a contemporary theology of technology that unfolds the potentials of an eschatological “Vollendlichkeit” (perfected finitude) of humanity and creation.

    Mangelhafte Maschinen. Der Mensch im Transhumanismus (Deficient Machines: The Human Being in Transhumanism)


    Meaning, Form and the Limits of Natural Language Processing

    This article engages the anthropological assumptions underlying the apprehensions and promises associated with language in artificial intelligence (AI). First, we present the contours of two rival paradigms for assessing artificial language generation: a holistic-enactivist theory of language and an informational theory of language. We then introduce two language generation models, one presently in use and one more speculative: first, the transformer architecture as used in current large language models (LLMs) such as the GPT series, and second, a model for 'autonomous machine intelligence' recently proposed by Yann LeCun, which involves not only language but sensory-motor interaction with the world. We assess the language capacity of these models from the perspectives of the two rival language paradigms. Taking a holistic-enactivist stance, we argue that there is currently no reason to assume a human-comparable language capacity in LLMs and, further, that LeCun's proposed model does not represent a significant step toward artificially generating human language, because it still lacks essential features that underlie the linguistic capacity of humans. Finally, we suggest that proponents of these rival interpretations of LLMs should enter into a constructive dialogue, continuously informed by further empirical, conceptual, and theoretical research.

    Speaker identification and clustering using convolutional neural networks

    Deep learning, especially in the form of convolutional neural networks (CNNs), has triggered substantial improvements in computer vision and related fields in recent years. This progress is attributed to the shift from designing features and subsequent individual sub-systems towards learning features and recognition systems end to end from nearly unprocessed data. For speaker clustering, however, it is still common to use handcrafted processing chains such as MFCC features and GMM-based models. In this paper, we use simple spectrograms as input to a CNN and study the optimal design of those networks for speaker identification and clustering. Furthermore, we elaborate on the question of how to transfer a network, trained for speaker identification, to speaker clustering. We demonstrate our approach on the well-known TIMIT dataset, achieving results comparable with the state of the art, without the need for handcrafted features.
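    As a rough illustration of the approach (the architecture below is a toy stand-in, not the network studied in the paper), a CNN can map a spectrogram to speaker logits while exposing an intermediate embedding that can later be reused for clustering:

```python
# Toy sketch: CNN over spectrograms with a reusable speaker embedding.
# Layer sizes and the input shape are illustrative assumptions.
import torch
import torch.nn as nn

class SpeakerCNN(nn.Module):
    def __init__(self, n_speakers: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (batch, 32, 1, 1)
        )
        self.embedding = nn.Linear(32, 64)     # reusable for clustering
        self.classifier = nn.Linear(64, n_speakers)

    def forward(self, x):
        h = self.features(x).flatten(1)
        emb = self.embedding(h)                # speaker embedding
        return self.classifier(emb), emb

model = SpeakerCNN(n_speakers=10)
spectrogram = torch.randn(2, 1, 128, 100)      # (batch, channel, freq bins, frames)
logits, embeddings = model(spectrogram)
print(logits.shape, embeddings.shape)          # torch.Size([2, 10]) torch.Size([2, 64])
```

    For identification, the logits are trained with cross-entropy; for clustering, the embeddings of unseen speakers would be grouped with a standard clustering algorithm.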

    Bayesian Semi-structured Subspace Inference

    Semi-structured regression models enable the joint modeling of interpretable structured and complex unstructured feature effects. The structured model part is inspired by statistical models and can be used to infer the input-output relationship for features of particular importance. The complex unstructured part defines an arbitrary deep neural network and thereby provides enough flexibility to achieve competitive prediction performance. While these models can also account for aleatoric uncertainty, there is still a lack of work on accounting for epistemic uncertainty. In this paper, we address this problem by presenting a Bayesian approximation for semi-structured regression models using subspace inference. To this end, we extend subspace inference for joint posterior sampling from a full parameter space for structured effects and a subspace for unstructured effects. Apart from this hybrid sampling scheme, our method allows for tunable complexity of the subspace and can capture multiple minima in the loss landscape. Numerical experiments validate our approach's efficacy in recovering structured effect parameter posteriors in semi-structured models and approaching the full-space posterior distribution of MCMC for increasing subspace dimension. Further, our approach exhibits competitive predictive performance across simulated and real-world datasets. Comment: Accepted at AISTATS 202
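    The semi-structured model class can be sketched as follows (layer sizes and names are illustrative assumptions; this shows only the predictor, not the paper's Bayesian subspace inference itself):

```python
# Illustrative sketch of a semi-structured regression model: an interpretable
# linear part for structured features plus a small neural network for
# unstructured features; their outputs are summed.
import torch
import torch.nn as nn

class SemiStructured(nn.Module):
    def __init__(self, n_structured: int, n_unstructured: int):
        super().__init__()
        # Structured part: plain linear model with interpretable coefficients.
        self.structured = nn.Linear(n_structured, 1)
        # Unstructured part: an arbitrary deep network for flexibility.
        self.unstructured = nn.Sequential(
            nn.Linear(n_unstructured, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, x_s, x_u):
        return self.structured(x_s) + self.unstructured(x_u)

model = SemiStructured(n_structured=3, n_unstructured=8)
y = model(torch.randn(5, 3), torch.randn(5, 8))
print(y.shape)  # torch.Size([5, 1])
```

    In the paper's setting, the structured weights would be sampled over their full parameter space while the unstructured weights are sampled only within a low-dimensional subspace.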